Low-complexity computation of plate eigenmodes with Vekua approximations and the Method of Particular Solutions
This paper extends the Method of Particular Solutions (MPS) to the
computation of eigenfrequencies and eigenmodes of plates. Specific
approximation schemes are developed, with plane waves (MPS-PW) or
Fourier-Bessel functions (MPS-FB). This framework also requires a suitable
formulation of the boundary conditions. Numerical tests, on two plates with
various boundary conditions, demonstrate that the proposed approach provides
competitive results with standard numerical schemes such as the Finite Element
Method, at reduced complexity, and with great flexibility in implementation choices.
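The plate problem above involves biharmonic operators and plate boundary conditions that are not reproduced here. As a hedged sketch, the MPS principle can be illustrated on the simpler Helmholtz (membrane) problem with a plane-wave basis, in the spirit of MPS-PW and of the standard subspace-angle formulation; the function names, discretization sizes, and regularization threshold are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mps_sigma(k, n_dirs=16, n_side=20, n_int=80, seed=0):
    """Smallest singular value sigma(k) of the boundary restriction of an
    orthonormalized plane-wave basis (subspace-angle-style MPS).
    Dips of sigma(k) locate Dirichlet eigenfrequencies of the unit square."""
    t = np.arange(n_dirs) * 2.0 * np.pi / n_dirs        # wave directions
    s = np.linspace(0.0, 1.0, n_side, endpoint=False)
    zeros, ones = np.zeros_like(s), np.ones_like(s)
    bnd = np.vstack([np.column_stack(c) for c in
                     [(s, zeros), (ones, s), (1 - s, ones), (zeros, 1 - s)]])
    interior = np.random.default_rng(seed).random((n_int, 2))

    def basis(pts):                                     # cos/sin plane waves
        phase = k * (np.outer(pts[:, 0], np.cos(t))
                     + np.outer(pts[:, 1], np.sin(t)))
        return np.hstack([np.cos(phase), np.sin(phase)])

    A = np.vstack([basis(bnd), basis(interior)])
    U, S, _ = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(S > 1e-10 * S[0]))                   # drop null directions
    return np.linalg.svd(U[:len(bnd), :r], compute_uv=False)[-1]

# Scan near the first Dirichlet eigenfrequency of the unit square, pi*sqrt(2)
ks = np.arange(4.2, 4.7, 0.002)
k_est = ks[int(np.argmin([mps_sigma(k) for k in ks]))]
```

At an eigenfrequency, some combination of plane waves nearly vanishes on the boundary while remaining of unit size inside the domain, so sigma(k) dips sharply there.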
Blind calibration for compressed sensing by convex optimization
We consider the problem of calibrating a compressed sensing measurement
system under the assumption that the decalibration consists of unknown gains on
each measure. We focus on {\em blind} calibration, using measures performed on
a few unknown (but sparse) signals. A naive formulation of this blind
calibration problem, using $\ell_1$ minimization, is reminiscent of blind
source separation and dictionary learning, which are known to be highly
non-convex and riddled with local minima. In the considered context, we show
that in fact this formulation can be exactly expressed as a convex optimization
problem, and can be solved using off-the-shelf algorithms. Numerical
simulations demonstrate the effectiveness of the approach even for highly
uncalibrated measures, when a sufficient number of (unknown, but sparse)
calibrating signals is provided. We observe that the success/failure of the
approach seems to obey sharp phase transitions.
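A hedged sketch of the convexification idea (the problem sizes, the sum-of-inverse-gains normalization, and all parameter values are illustrative assumptions): writing delta for the inverse gains makes the measurement constraints linear in (delta, X), so the $\ell_1$ objective becomes a linear program, solved here with scipy.optimize.linprog as an off-the-shelf solver:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, P, k = 15, 20, 6, 2                     # measures, dim, signals, sparsity
A = rng.standard_normal((m, n))
d = 1.0 + 0.5 * rng.random(m)                 # unknown positive gains
X = np.zeros((n, P))
for p in range(P):                            # k-sparse calibrating signals
    X[rng.choice(n, k, replace=False), p] = rng.standard_normal(k)
Y = d[:, None] * (A @ X)                      # decalibrated measurements

# With delta = 1/d, diag(delta) Y = A X is linear in (delta, X).  Minimize
# sum |X| subject to these constraints and sum(delta) = m (a normalization
# fixing the global scale).  LP variables: [delta, X_plus, X_minus] >= 0.
nv = m + 2 * n * P
c = np.concatenate([np.zeros(m), np.ones(2 * n * P)])
Aeq = np.zeros((m * P + 1, nv))
beq = np.zeros(m * P + 1)
for p in range(P):
    rows = slice(p * m, (p + 1) * m)
    Aeq[rows, :m] = np.diag(Y[:, p])
    Aeq[rows, m + 2 * n * p: m + 2 * n * p + n] = -A      # -A x_plus
    Aeq[rows, m + 2 * n * p + n: m + 2 * n * (p + 1)] = A  # +A x_minus
Aeq[-1, :m] = 1.0
beq[-1] = m
res = linprog(c, A_eq=Aeq, b_eq=beq, method="highs")
delta = res.x[:m]
scale = m / np.sum(1.0 / d)                   # undo the normalization's scale
gains_est = scale / delta                     # recovered gains
```

The recovered gains match the true ones up to the global scale fixed by the normalization constraint.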
Nearfield Acoustic Holography using sparsity and compressive sampling principles
Regularization of the inverse problem is a complex issue when using
Near-field Acoustic Holography (NAH) techniques to identify the vibrating
sources. This paper shows that, for convex homogeneous plates with arbitrary
boundary conditions, new regularization schemes can be developed, based on the
sparsity of the normal velocity of the plate in a well-designed basis, i.e. the
possibility to approximate it as a weighted sum of a few elementary basis
functions. In particular, these new techniques can handle discontinuities of
the velocity field at the boundaries, which can be problematic with standard
techniques. This comes at the cost of a higher computational complexity to
solve the associated optimization problem, though it remains easily tractable
with out-of-the-box software. Furthermore, this sparsity framework allows us to
take advantage of the concept of Compressive Sampling: under some conditions on
the sampling process (here, the design of a random array, which can be
numerically and experimentally validated), it is possible to reconstruct the
sparse signals with significantly fewer measurements (i.e., microphones) than
classically required. After introducing the different concepts, this paper
presents numerical and experimental results of NAH with two plate geometries,
and compares the advantages and limitations of these sparsity-based techniques
over standard Tikhonov regularization. Comment: Journal of the Acoustical Society of America (2012).
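The NAH basis design and plate physics are not reproduced here. As a hedged illustration of the compressive-sampling principle only, a generic greedy solver (Orthogonal Matching Pursuit, not necessarily the solver used in the paper) recovers a field sparse in a cosine basis from a few random "microphone" samples; all sizes are illustrative assumptions:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of Phi."""
    residual, idx = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x = np.zeros(Phi.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 50, 4                      # grid size, microphones, sparsity
# DCT-II-like orthonormal basis (columns = elementary basis functions)
D = np.cos(np.pi * (np.arange(n)[:, None] + 0.5) * np.arange(n)[None, :] / n)
D /= np.linalg.norm(D, axis=0)
c = np.zeros(n)
c[rng.choice(n, k, replace=False)] = rng.choice([-2.0, -1.0, 1.0, 2.0], k)
field = D @ c                              # sparse "velocity field" on grid
sensors = rng.choice(n, m, replace=False)  # random sub-sampling (microphones)
c_hat = omp(D[sensors], field[sensors], k)
field_hat = D @ c_hat
```

With far fewer samples than grid points, the sparse coefficients, and hence the whole field, are recovered from the random sub-sampling.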
Optimizing Source and Sensor Placement for Sound Field Control: An Overview
In order to control an acoustic field inside a target region, it is important to choose suitable positions of secondary sources (loudspeakers) and sensors (control points/microphones). This paper provides an overview of state-of-the-art source and sensor placement methods in sound field control. Although the placement of both sources and sensors greatly affects control accuracy and filter stability, their joint optimization has not been thoroughly investigated in the acoustics literature. In this context, we reformulate five general source and/or sensor placement methods that can be applied to sound field control. We compare the performance of these methods through extensive numerical simulations in both narrowband and broadband scenarios. Index Terms: source and sensor placement, sound field control, sound field reproduction, subset selection, interpolation.
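None of the five methods compared in the paper is reproduced here. As a hedged sketch of one common family of placement criteria, the following shows a greedy determinant-maximization (D-optimal-style) selection of loudspeaker positions from a candidate transfer matrix; the matrix and all sizes are synthetic assumptions:

```python
import numpy as np

def greedy_placement(G, k):
    """Greedily pick k columns of the transfer matrix G (control points x
    candidate sources), each step maximizing log det of the selected
    Gram matrix -- a D-optimal-style subset-selection criterion."""
    chosen = []
    for _ in range(k):
        best_j, best_val = -1, -np.inf
        for j in range(G.shape[1]):
            if j in chosen:
                continue
            Gs = G[:, chosen + [j]]
            val = np.linalg.slogdet(Gs.T @ Gs)[1]
            if val > best_val:
                best_j, best_val = j, val
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(0)
G = rng.standard_normal((30, 40))          # 30 control points, 40 candidates
sel = greedy_placement(G, 8)               # place 8 loudspeakers
```

Maximizing the determinant of the selected Gram matrix favors mutually independent candidate columns, which tends to improve control accuracy and filter stability relative to arbitrary placements.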
Sparsity-based localization of spatially coherent distributed sources
In this paper, the localization of spatially distributed sources is considered. Based on the problem formulation of the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS), a criterion based on convex optimization under a sparsity constraint is proposed to locate the sources. An original method is also given to recover the angular distributions and the powers of the sources. Simulations, executed in the scenario of a mixture of distributed and point sources, illustrate the validity of the proposed approach compared to other methods.
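A hedged sketch of the DAMAS-style formulation the criterion builds on, restricted to point sources (the paper's distributed-source criterion and angular-distribution recovery are not reproduced): the beamforming map is modeled as a point-spread-function matrix applied to nonnegative source powers, deconvolved here with nonnegative least squares; the array geometry and angular grid are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import nnls

M, n_grid = 12, 50                            # sensors, angular grid points
d = np.arange(M) * 0.5                        # half-wavelength linear array
theta = np.linspace(-np.pi / 3, np.pi / 3, n_grid)
V = np.exp(2j * np.pi * d[:, None] * np.sin(theta)[None, :])  # steering
# PSF matrix: beamformer response at grid angle i to a unit source at j
A = np.abs(V.conj().T @ V) ** 2 / M**2

q_true = np.zeros(n_grid)
q_true[[12, 33]] = [1.0, 0.5]                 # two point sources (powers)
b = A @ q_true                                # conventional beamforming map

q_hat, _ = nnls(A, b)                         # DAMAS-style deconvolution
```

The nonnegativity constraint plays the role of the sparsity prior here: the broad beamforming lobes in b are deconvolved back to sharp peaks at the source angles.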
Imaging With Nature: Compressive Imaging Using a Multiply Scattering Medium
The recent theory of compressive sensing leverages the structure of
signals to acquire them with far fewer measurements than previously
thought necessary, and certainly well below the traditional Nyquist-Shannon
sampling rate. However, most implementations developed to take advantage of
this framework revolve around controlling the measurements with carefully
engineered material or acquisition sequences. Instead, we use the natural
randomness of wave propagation through multiply scattering media as an optimal
and instantaneous compressive imaging mechanism. Waves reflected from an object
are detected after propagation through a well-characterized complex medium.
Each local measurement thus contains global information about the object,
yielding a purely analog compressive sensing method. We experimentally
demonstrate the effectiveness of the proposed approach for optical imaging by
using a 300-micrometer thick layer of white paint as the compressive imaging
device. Scattering media are thus promising candidates for designing efficient
and compact compressive imagers. Comment: 17 pages, 8 figures.
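A hedged numerical analogy (no optics are simulated): the well-characterized scattering medium acts as a known random sensing matrix, and a sparse object is recovered by iterative soft-thresholding (ISTA), one standard l1 solver; the Gaussian matrix, sizes, and regularization weight are illustrative assumptions:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
n, m, k = 128, 60, 3                          # object size, detectors, sparsity
H = rng.standard_normal((m, n)) / np.sqrt(m)  # "medium" transmission matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.choice([-2.0, -1.0, 1.0, 2.0], k)
y = H @ x                                     # speckle-like measurements

# ISTA iterations for min 0.5*||H x - y||^2 + lam*||x||_1
L = np.linalg.norm(H, 2) ** 2                 # Lipschitz constant of gradient
lam = 0.02
x_hat = np.zeros(n)
for _ in range(2000):
    x_hat = soft(x_hat + (H.T @ (y - H @ x_hat)) / L, lam / L)
support = np.argsort(np.abs(x_hat))[-k:]      # dominant reconstructed pixels
```

Each detector output mixes all object pixels through the medium, yet the sparse object's support is recovered from m < n purely analog measurements.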
Comprehensive space-time hydrometeorological simulations for estimating very rare floods at multiple sites in a large river basin
Estimates for rare to very rare floods are limited by the relatively short streamflow records available. Often, pragmatic conversion factors are used to quantify such events based on extrapolated observations, or simplifying assumptions are made about extreme precipitation and resulting flood peaks. Continuous simulation (CS) is an alternative approach that better links flood estimation with physical processes and avoids assumptions about antecedent conditions. However, long-term CS has hardly been implemented to estimate rare floods (i.e. return periods considerably larger than 100 years) at multiple sites in a large river basin to date. Here we explore the feasibility and reliability of the CS approach for 19 sites in the Aare River basin in Switzerland (area: 17 700 km2) with exceedingly long simulations in a hydrometeorological model chain. The chain starts with a multi-site stochastic weather generator used to generate 30 realizations of hourly precipitation and temperature scenarios of 10 000 years each. These realizations were then run through a bucket-type hydrological model for 80 sub-catchments and finally routed downstream with a simplified representation of main river channels, major lakes and relevant floodplains in a hydrologic routing system. Comprehensive evaluation over different temporal and spatial scales showed that the main features of the meteorological and hydrological observations are well represented and that meaningful information on low-probability floods can be inferred. Although uncertainties are still considerable, the explicit consideration of important processes of flood generation and routing (snow accumulation, snowmelt, soil moisture storage, bank overflow, lake and floodplain retention) is a substantial advantage. The approach allows for comprehensively exploring possible but unobserved spatial and temporal patterns of hydrometeorological behaviour. 
This is of particular value in a large river basin where the complex interactions of flows from individual tributaries and lake regulations are typically not well represented in the streamflow observations. The framework is also suitable for estimating more frequent floods, as often required in engineering and hazard mapping.
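The Aare model chain is not reproduced here. As a minimal sketch of the kind of bucket-type model the abstract describes, the following combines degree-day snow accumulation/melt with a single spilling soil-moisture reservoir; all parameters and the synthetic forcing are illustrative assumptions:

```python
import numpy as np

def bucket_model(precip, temp, t_snow=0.0, ddf=3.0, s_max=100.0, k_out=0.05):
    """Minimal bucket-type model (illustrative parameters):
    precip [mm/step], temp [degC] -> runoff [mm/step].
    Degree-day snowmelt feeds a soil-moisture bucket that spills
    when saturated and drains as a linear reservoir."""
    snow = soil = 0.0
    runoff = np.zeros_like(precip)
    for i, (p, t) in enumerate(zip(precip, temp)):
        if t < t_snow:                       # snow accumulation
            snow += p
            rain = 0.0
        else:                                # rain plus degree-day melt
            melt = min(snow, ddf * (t - t_snow))
            snow -= melt
            rain = p + melt
        soil += rain
        spill = max(soil - s_max, 0.0)       # saturation-excess runoff
        soil -= spill
        drain = k_out * soil                 # linear-reservoir baseflow
        soil -= drain
        runoff[i] = spill + drain
    return runoff, snow, soil

rng = np.random.default_rng(0)
hours = 24 * 365
precip = rng.gamma(0.1, 2.0, hours)          # synthetic hourly forcing
temp = (5.0 + 10.0 * np.sin(2 * np.pi * np.arange(hours) / hours)
        + rng.normal(0.0, 3.0, hours))
q, snow_end, soil_end = bucket_model(precip, temp)
```

A model of this kind conserves mass by construction: every millimetre of precipitation ends up in the snowpack, the soil store, or the simulated runoff, which is what makes very long continuous simulations physically consistent.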